- North America > United States > New York > New York County > New York City (0.04)
- Asia > Afghanistan > Parwan Province > Charikar (0.04)
Appendices
In Equation 4, maximization is carried out over the input y to the inverse map and over the input z, which is captured by p̂ in the above optimization problem; i.e., maximization over z in Equation 4 is equivalent to choosing p̂, subject to the restriction that p̂ be a singleton/Dirac-delta distribution. Since Equation 4 describes a constrained optimization problem, our approach to solving it in practice is dual gradient descent: gradient descent is used to optimize the Lagrangian of Equation 4 (with the constraint p(z) >= eps_2 modified to log p(z) >= log eps_2, since log p(z) is easier to use numerically for stochastic optimization), shown in Equation 5. At each iteration, it samples a function from this distribution and queries the point x*_t that greedily minimizes this function.

Information Ratio. Russo and Van Roy [30] related the expected regret of TS to its expected information gain, i.e., the expected reduction in the entropy of the posterior distribution of X*.
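The appendix text above describes solving a constrained maximization by gradient steps on its Lagrangian together with a dual update on the multiplier. A minimal sketch of that dual-gradient-descent loop follows, assuming a generic differentiable objective f(y, z) and a log-density log p(z) with threshold eps; the function names, step sizes, and exact constraint form are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of dual gradient descent for a problem of the form
#   max_{y,z} f(y, z)   subject to   log p(z) >= eps,
# via its Lagrangian L(y, z, lam) = f(y, z) + lam * (log p(z) - eps).
# f, log_p, eps, and the step sizes are illustrative assumptions.
import torch

def dual_gradient_descent(f, log_p, y0, z0, eps, steps=1000, lr=1e-2, lr_dual=1e-2):
    y = y0.clone().requires_grad_(True)
    z = z0.clone().requires_grad_(True)
    lam = torch.zeros(())  # dual variable for the log-density constraint
    for _ in range(steps):
        lagrangian = f(y, z) + lam * (log_p(z) - eps)
        grad_y, grad_z = torch.autograd.grad(lagrangian, (y, z))
        with torch.no_grad():
            y += lr * grad_y  # primal ascent on the Lagrangian
            z += lr * grad_z
            # Dual descent: lam grows while log p(z) >= eps is violated,
            # and is clipped at zero to remain a valid multiplier.
            lam = (lam - lr_dual * (log_p(z) - eps)).clamp_min(0.0)
    return y.detach(), z.detach()
```

The dual variable lam grows whenever log p(z) dips below the threshold, which pushes subsequent primal steps back toward the feasible region.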
Sparse Multiple Kernel Learning: Alternating Best Response and Semidefinite Relaxations
Bertsimas, Dimitris, Iglesias, Caio de Prospero, Johnson, Nicholas A. G.
We study Sparse Multiple Kernel Learning (SMKL), which is the problem of selecting a sparse convex combination of prespecified kernels for support vector binary classification. Unlike prevailing l1 regularized approaches that approximate a sparsifying penalty, we formulate the problem by imposing an explicit cardinality constraint on the kernel weights and add an l2 penalty for robustness. We solve the resulting non-convex minimax problem via an alternating best response algorithm with two subproblems: the alpha subproblem is a standard kernel SVM dual solved via LIBSVM, while the beta subproblem admits an efficient solution via the Greedy Selector and Simplex Projector algorithm. We reformulate SMKL as a mixed integer semidefinite optimization problem and derive a hierarchy of semidefinite convex relaxations which can be used to certify near-optimality of the solutions returned by our best response algorithm and also to warm start it. On ten UCI benchmarks, our method with random initialization outperforms state-of-the-art MKL approaches in out-of-sample prediction accuracy on average by 3.34 percentage points (relative to the best performing benchmark) while selecting a small number of candidate kernels in comparable runtime. With warm starting, our method outperforms the best performing benchmark's out-of-sample prediction accuracy on average by 4.05 percentage points. Our convex relaxations provide a certificate that in several cases, the solution returned by our best response algorithm is the globally optimal solution.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Asia > Middle East > Jordan (0.04)
- North America > United States > Wisconsin (0.04)
- (2 more...)
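The SMKL abstract above names the Greedy Selector and Simplex Projector (GSSP) as the solver for the beta subproblem, i.e., projecting kernel weights onto the probability simplex under a cardinality constraint. Below is a hedged sketch of that projection in its usual formulation (keep the k largest coordinates, then Euclidean-project them onto the simplex); variable names and the subproblem interface are assumptions, not the authors' implementation.

```python
# Sketch of the Greedy Selector and Simplex Projector (GSSP): Euclidean
# projection of a weight vector onto {beta >= 0, sum(beta) = 1, ||beta||_0 <= k}.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sorting method)."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - 1.0) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)    # soft-threshold level
    return np.maximum(v - theta, 0.0)

def gssp(v, k):
    """Keep the k largest entries of v (greedy selector), project them
    onto the simplex (simplex projector), and zero out the rest."""
    idx = np.argsort(v)[::-1][:k]
    beta = np.zeros_like(v, dtype=float)
    beta[idx] = project_simplex(v[idx])
    return beta
```

For example, gssp(np.array([0.5, -0.2, 0.9, 0.3]), k=2) returns a vector supported on the two largest coordinates, with weights 0.7 and 0.3 summing to one.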
Continuously Augmented Discrete Diffusion model for Categorical Generative Modeling
Zheng, Huangjie, Gong, Shansan, Zhang, Ruixiang, Chen, Tianrong, Gu, Jiatao, Zhou, Mingyuan, Jaitly, Navdeep, Zhang, Yizhe
Standard discrete diffusion models treat all unobserved states identically by mapping them to an absorbing [MASK] token. This creates an 'information void' where semantic information that could be inferred from unmasked tokens is lost between denoising steps. We introduce Continuously Augmented Discrete Diffusion (CADD), a framework that augments the discrete state space with a paired diffusion in a continuous latent space. This yields graded, gradually corrupted states in which masked tokens are represented by noisy yet informative latent vectors rather than collapsed 'information voids'. At each reverse step, CADD may leverage the continuous latent as a semantic hint to guide discrete denoising. The design is clean and compatible with existing discrete diffusion training. At sampling time, the strength and choice of estimator for the continuous latent vector enables a controlled trade-off between mode-coverage (generating diverse outputs) and mode-seeking (generating contextually precise outputs) behaviors. Empirically, we demonstrate CADD improves generative quality over mask-based diffusion across text generation, image synthesis, and code modeling, with consistent gains on both qualitative and quantitative metrics against strong discrete baselines.
- Europe > Austria > Vienna (0.14)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (2 more...)
- Research Report (0.64)
- Workflow (0.45)
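Since the CADD abstract describes pairing an absorbing discrete chain with a continuous latent diffusion over token embeddings, a small illustrative sketch of one such forward-corruption step is given below. This is a reading of the abstract, not the authors' code: embed, mask_id, and the shared schedule value alpha_t are assumed names, and real masked-diffusion schedules would differ.

```python
# Illustrative sketch of a CADD-style forward corruption step: tokens are
# absorbed into [MASK] with probability (1 - alpha_t), while a paired
# continuous latent keeps a noised copy of every token's embedding, so
# masked positions are not pure "information voids".
import math
import torch

def cadd_forward_corrupt(tokens, embed, mask_id, alpha_t):
    """tokens: (B, L) long tensor; embed: token embedding module;
    alpha_t: float in (0, 1] from the corruption schedule."""
    x0 = embed(tokens)  # clean continuous latents, shape (B, L, D)
    # Discrete chain: independently keep each token with probability alpha_t.
    keep = torch.rand(tokens.shape, device=tokens.device) < alpha_t
    x_t = tokens.where(keep, torch.full_like(tokens, mask_id))
    # Continuous chain: Gaussian corruption of the clean embeddings.
    z_t = math.sqrt(alpha_t) * x0 + math.sqrt(1.0 - alpha_t) * torch.randn_like(x0)
    return x_t, z_t  # the denoiser can condition on both
```

The discrete chain x_t is what a standard mask-based model would see; the paired z_t is the "noisy yet informative" latent the abstract describes, available as a semantic hint at each reverse step.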